Presented at DEF CON 32 (2024), Aug. 9, 2024, 10 a.m. (20 minutes).
In recent years, CCTV footage has been integrated into systems that observe areas and detect malicious actors (e.g., criminals, terrorists) traversing them. However, this footage has "blind spots": areas where objects are detected with lower confidence due to their angle or distance from the camera.
In this talk, we investigate a novel side effect of object detection in CCTV footage: location-based confidence weakness.
We demonstrate that a pedestrian's position (distance, angle, height) in footage impacts an object detector's confidence.
We analyze this phenomenon in four lighting conditions (lab, morning, afternoon, night) using five object detectors (YOLOv3, Faster R-CNN, SSD, DiffusionDet, RTMDet).
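As a rough illustration of how such a per-position probe can be run, the sketch below scores individual frames with a pretrained Faster R-CNN from torchvision (one of the five detectors listed above). The frame paths and position labels are hypothetical placeholders, not our experimental setup.

```python
# Minimal sketch (an assumption, not the talk's code): probe how a pedestrian's
# position in the frame affects a pretrained detector's confidence.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Any off-the-shelf detector works here; Faster R-CNN is one of the five studied.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
PERSON_CLASS_ID = 1  # COCO label index for "person" in torchvision detectors

def person_confidence(image_path):
    """Return the highest 'person' score the detector assigns to one frame."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]
    scores = out["scores"][out["labels"] == PERSON_CLASS_ID]
    return scores.max().item() if len(scores) else 0.0

# Hypothetical probe: the same pedestrian filmed at different distances/angles.
probes = {("5m", "0deg"): "frames/d05_a00.jpg", ("20m", "45deg"): "frames/d20_a45.jpg"}
for (dist, angle), path in probes.items():
    print(dist, angle, round(person_confidence(path), 3))
```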
We then demonstrate this in footage of pedestrian traffic from three locations (Broadway, Shibuya Crossing, Castro Street), showing that each contains "blind spots" where pedestrians are detected with low confidence. This effect persists across locations, object detectors, and times of day, and a malicious actor could exploit it to avoid detection.
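One way to visualize such blind spots, assuming per-detection confidences have already been collected as (x, y, confidence) tuples from many frames of a fixed camera, is to pool them over a spatial grid; the sketch below is our assumption of how such a map could be built, not the talk's exact pipeline.

```python
# Minimal sketch: mean detection confidence per grid cell over one camera view.
# Cells with persistently low means are candidate "blind spots".
import numpy as np

def confidence_heatmap(detections, frame_w, frame_h, grid=(12, 8)):
    """detections: iterable of (x_center, y_center, confidence) in pixel coords."""
    gx, gy = grid
    sums, counts = np.zeros((gy, gx)), np.zeros((gy, gx))
    for x, y, conf in detections:
        cx = min(int(x / frame_w * gx), gx - 1)
        cy = min(int(y / frame_h * gy), gy - 1)
        sums[cy, cx] += conf
        counts[cy, cx] += 1
    with np.errstate(invalid="ignore"):
        return sums / counts  # cells that saw no detections become NaN

# Hypothetical usage: flag cells well below the global mean as blind spots.
heat = confidence_heatmap([(100, 200, 0.91), (1800, 950, 0.42)], 1920, 1080)
print(np.nanmean(heat))
```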
We propose TipToe, a novel evasion attack that leverages these "blind spots" to construct a minimum-confidence path between two points in a CCTV-recorded area.
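At its core, a minimum-confidence route can be framed as a shortest-path problem over a confidence grid like the one above; the sketch below uses Dijkstra's algorithm with each cell's mean confidence as its traversal cost. This is an illustrative assumption about the mechanics, not TipToe's actual implementation, which may optimize a different objective (e.g., the path's maximum confidence).

```python
# Minimal sketch: least-total-confidence path across a confidence grid.
# heat: 2D array of per-cell confidences (empty/NaN cells filled beforehand,
# e.g. with a high value so they are treated as well-covered).
import heapq

def min_confidence_path(heat, start, goal):
    """start/goal: (row, col) grid cells; returns the list of cells traversed."""
    rows, cols = len(heat), len(heat[0])
    dist = {start: heat[start[0]][start[1]]}
    prev, pq = {}, [(dist[start], start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + heat[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```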
We demonstrate its performance on footage of Broadway, Shibuya Crossing, and Castro Street, observed by YOLOv3, Faster R-CNN, SSD, DiffusionDet, and RTMDet.
TipToe reduces max/average confidence by 0.10 and 0.16, respectively, on paths in Shibuya Crossing observed by YOLOv3, with similar performance for other locations and object detectors.
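For context on how numbers like these can be computed, a path's exposure can be summarized as the maximum and mean confidence of the grid cells it crosses and compared against a straight-line baseline between the same endpoints; the helper below is hypothetical and not taken from the talk.

```python
# Hypothetical helper: summarize a path's exposure so a blind-spot-following
# route can be compared against a naive straight-line route.
def path_exposure(heat, path):
    confs = [heat[r][c] for r, c in path]
    return max(confs), sum(confs) / len(confs)

# e.g. per-metric reduction vs. a baseline path:
# reduction = tuple(b - t for b, t in zip(path_exposure(heat, baseline_path),
#                                         path_exposure(heat, tiptoe_path)))
```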
Presenters:
- Jacob Shams, Ph.D. Researcher at Cyber@Ben-Gurion University
Jacob Shams is a Ph.D. student at Ben-Gurion University of the Negev (BGU). His work addresses the security of AI models and systems, model extraction attacks, deep neural network (DNN) watermarking, and robustness of computer vision (CV) models.
Jacob is a Ph.D. researcher at Cyber@Ben-Gurion University (CBG), where he works on multiple research projects in AI security. He holds a B.Sc. in Software Engineering and an M.Sc. in Software and Information Systems Engineering, both from BGU.