Presented at
Black Hat USA 2022,
Aug. 10, 2022, 10:20 a.m.
(40 minutes).
Many real-world datasets come in the form of graphs. Graph neural networks (GNNs), a new family of machine learning (ML) models, have been proposed to fully leverage graph data to build powerful applications. In particular, inductive GNNs, which can generalize to unseen data, have become mainstream in this direction. These models have enabled practical solutions to many real-world problems, such as node classification, community detection, link prediction/recommendation, binary similarity detection, malware detection, fraud detection, and bot detection.
Training a good model requires a large amount of proprietary data as well as substantial computational resources, making the resulting model valuable intellectual property. Previous research has shown that ML models are prone to model stealing attacks, which aim to steal the functionality of the target models. However, most of this work focuses on models trained with non-structured data (such as images and text). Little attention has been paid to the security of models trained with graph data, i.e., GNNs, and, more interestingly, to the privacy of the raw data used to train them.
In this talk, we outline three novel attacks against GNNs: a model stealing attack, a link re-identification attack, and a graph reconstruction attack. We first show that attackers, disguised as benign customers of your commercially deployed GNN models, can leverage our model stealing attack to steal GNNs with high accuracy and high fidelity. We then demonstrate that attackers can infer private and sensitive relationships contained in the raw data you used to train the GNNs. We finally reveal a novel graph reconstruction attack that can reconstruct a graph with structural statistics similar to those of the target graph. Note that certain graph data is often expensive to obtain and proprietary (e.g., biomedical/molecular graphs collected from lab studies). Such graph reconstruction attacks may pose a direct threat to pharmaceutical companies leveraging GNNs to accelerate drug discovery.
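To make the model stealing attack concrete, the sketch below simulates the general recipe: an attacker queries a black-box node-classification API for class posteriors, then trains a surrogate model on those stolen posteriors until it agrees with the target. Everything here is a hypothetical stand-in, not the talk's actual attack: the "deployed GNN" is simulated by a one-layer GCN-style scorer over a random graph, and the surrogate shares that architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a small random graph with node features, standing in
# for the attacker's own (public) query graph.
n_nodes, n_feats, n_classes = 30, 8, 3
X = rng.normal(size=(n_nodes, n_feats))
A = (rng.random((n_nodes, n_nodes)) < 0.1).astype(float)
A = np.maximum(A, A.T)                    # undirected edges
A_hat = A + np.eye(n_nodes)               # add self-loops
A_hat /= A_hat.sum(axis=1, keepdims=True) # row-normalize adjacency

# Simulated target model (in reality: a deployed GNN behind an API).
W_target = rng.normal(size=(n_feats, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def query_target(X, A_hat):
    """Black-box API call: returns per-node class posteriors."""
    return softmax(A_hat @ X @ W_target)

# Step 1: the attacker queries the API and records the posteriors.
posteriors = query_target(X, A_hat)

# Step 2: the attacker trains a surrogate of the same shape on the stolen
# posteriors, minimizing cross-entropy by gradient descent.
Z = A_hat @ X                             # neighborhood-propagated features
W_sur = np.zeros((n_feats, n_classes))
for _ in range(500):
    P = softmax(Z @ W_sur)
    grad = Z.T @ (P - posteriors) / n_nodes
    W_sur -= 0.5 * grad

# Fidelity: fraction of nodes where the surrogate's predicted class
# matches the target's — the metric a stealing attack tries to maximize.
agree = (softmax(Z @ W_sur).argmax(1) == posteriors.argmax(1)).mean()
print(f"fidelity: {agree:.2f}")
```

Because the surrogate here is realizable (it shares the target's architecture), fidelity approaches 1.0; against a real deployed GNN, the attacker instead picks a surrogate architecture and query budget and settles for approximate agreement.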
Presenters:
-
Yang Zhang
- Professor, CISPA Helmholtz Center for Information Security
Yang Zhang is a faculty member at CISPA Helmholtz Center for Information Security. Previously, he was a research group leader at CISPA. From January 2017 to December 2018, he was a postdoc with Michael Backes. Prior to that, he obtained his PhD degree from the University of Luxembourg in November 2016 under the supervision of Sjouke Mauw and Jun Pang. He obtained his bachelor's (2009) and master's (2012) degrees from Shandong University, China.
-
Yun Shen
- Technical Director, Spot by NetApp
Dr. Yun Shen's current research interests focus on applying data-driven approaches to better understand malicious activity on the Internet. Through the collection and analysis of large-scale datasets, he has developed novel and robust mitigation techniques to make the Internet a safer place. His research involves a mix of quantitative analysis, machine learning, and systems design. He has authored a number of papers in international journals and conferences, and holds 30+ granted US patents.
-
Azzedine Benameur
- GM of Security Products, Spot by NetApp
Dr. Azzedine Benameur, an experienced researcher in security and privacy with a strong industrial focus, is currently the GM of Security Products at Spot by NetApp in Washington, D.C. He previously led the cybersecurity research group and mobile security research and development at Kryptowire. With over 10 years of experience working on security, privacy, cloud security, and mobile, he has a proven track record of delivering industry-focused research with prototypes and patents while pushing the state of the art with academic publications. In his past role at Symantec, he was in charge of enhancing the detection of rooted devices and shipped a novel patented solution in both enterprise and consumer versions of Norton used by millions. He also focused on cloud security and low-level binary security issues through DARPA- and IARPA-funded projects (MEERKATS and MINESTRONE). Prior to Symantec, he was a researcher in the Cloud and Security Lab of HP Labs Bristol, UK, where he worked on privacy as part of the European Union's EnCoRe project, investigating fine-grained consent and revocation in user-centric applications. Before that, he worked on SERENITY, another European Union security research project, at the Security & Trust Lab of SAP Research.