I'm Not a Human: Breaking the Google reCAPTCHA

Presented at Black Hat Asia 2016.

Since their inception, captchas have been widely used to prevent fraudsters from performing illicit actions. Nevertheless, economic incentives have resulted in an arms race, where fraudsters develop automated solvers and, in turn, captcha services tweak their design to break the solvers. Recent work, however, presented a generic attack that can be applied to any text-based captcha scheme. Fittingly, Google recently unveiled the latest version of reCaptcha. The goal of their new system is twofold: to minimize the effort for legitimate users, while requiring tasks that are more challenging to computers than text recognition. ReCaptcha is driven by an "advanced risk analysis system" that evaluates requests and selects the difficulty of the captcha that will be returned. Users may be required to click on a checkbox, or to identify images with similar content.

In this paper, we conduct a comprehensive study of reCaptcha, and explore how the risk analysis is influenced by each aspect of the request. Through extensive experimentation, we identify flaws that allow adversaries to effortlessly influence the risk analysis process, bypass restrictions, and deploy large-scale attacks. Subsequently, we design a novel low-cost attack that leverages deep learning technologies for the semantic annotation of images. Our system is extremely effective, automatically solving 70.78% of the image reCaptcha challenges while requiring only 19 seconds per challenge. We also apply our attack to the Facebook image captcha and achieve an accuracy of 83.5%. Based on our experimental findings, we propose a series of safeguards and modifications designed to reduce the scalability and accuracy of such attacks. Overall, while our study focuses on reCaptcha, our findings have wide implications; as the semantic information conveyed via images is increasingly within the realm of automated reasoning, the future of captchas relies on the exploration of novel directions.
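
To illustrate the core idea behind an image-annotation attack of this kind, the sketch below labels each candidate challenge tile with an off-the-shelf ImageNet classifier and keeps the tiles whose predicted labels contain the challenge keyword. This is not the authors' implementation; the file paths, the "wine" hint, and the choice of classifier are illustrative assumptions only.

# Minimal sketch (not the authors' system): semantically annotate candidate
# tiles with a pretrained classifier and keep those matching the challenge hint.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()            # standard ImageNet preprocessing
categories = weights.meta["categories"]      # human-readable class labels

def top_labels(path, k=5):
    """Return the k most probable ImageNet labels for one candidate image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    top = probs.topk(k)
    return [(categories[int(i)], p.item()) for p, i in zip(top.values, top.indices)]

def select_candidates(paths, hint, k=5):
    """Select the tiles whose predicted labels mention the challenge keyword."""
    return [p for p in paths
            if any(hint in label.lower() for label, _ in top_labels(p, k))]

# Hypothetical usage: a "wine" challenge with nine candidate tiles.
# print(select_candidates([f"tile_{i}.jpg" for i in range(9)], hint="wine"))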


Presenters:

  • Iasonas Polakis - Columbia University
    Iasonas (Jason) Polakis is a Postdoctoral Research Scientist in the Department of Computer Science at Columbia University. His research interests lie in exploring the security limitations of online social networks, developing practical attacks and exploring their privacy implications, and designing robust countermeasures. His recent work includes exploring alternative authentication mechanisms, fine-grained access control for shared content, and designing attacks against next-generation CAPTCHA systems.
  • Suphannee Sivakorn - Columbia University
    Suphannee Sivakorn is a PhD student in the Computer Science Department of Columbia University. Her interests lie in the security and privacy aspects of social networks, as well as web security.
