| Graduate Student: | 吳啓榮 Wu, Chi-Jung |
|---|---|
| Thesis Title: | 如何不被後門攻擊? 圖後門攻擊的資料存取控制分析與防禦策略 Safeguarding against Graph Backdoor Attacks via Data Access Control Analysis and A Simple Defense Strategy |
| Advisors: | 李政德 Li, Cheng-Te; 張欣民 Chang, Hsing-Ming |
| Degree: | Master |
| Department: | Institute of Data Science, College of Management |
| Publication Year: | 2024 |
| Academic Year: | 112 |
| Language: | English |
| Pages: | 40 |
| Keywords (Chinese): | 深度學習, 圖神經網路, 圖後門攻擊 |
| Keywords (English): | Deep Learning, Graph Neural Networks, Graph Backdoor Attack |
Graph Neural Networks have demonstrated powerful capabilities in processing graph-structured data. However, as this technology is adopted across industries, increasing attention has been paid to the security and robustness of models in practical use. This study focuses on graph backdoor attacks against GNNs in node classification tasks. A graph backdoor attack is an adversarial attack in which the attacker embeds triggers in both the training and test data; the attack takes effect only when the trigger is present, thereby manipulating the model's predictions. Existing research on graph backdoor attacks concentrates on varying and improving attack methods, striving to strengthen attack effectiveness and conceal attack traces. These studies, however, are often based on overly idealized scenarios that grant the attacker access to training data beyond any reasonable scope in order to optimize the attack objective, which does not match real-world graph-structured data settings. This study therefore first examines, under realistic application scenarios, how the constraints an attacker faces under a limited budget affect attack effectiveness, indicating the directions future attack research should prioritize; these insights can likewise help defenders devise corresponding security-hardening measures. In addition, this study proposes BAMINANT, a novel and simple defense against graph backdoor attacks. The method exploits the inherent characteristics of graph backdoor attacks for detection and defense: it identifies as malicious backdoor nodes those nodes that exert targeted, manipulative influence over the classification results of their neighbors, and thereby effectively defends against all existing graph backdoor attack models. BAMINANT is simple and intuitive in principle, yet performs excellently in both defense effectiveness and generality.
While Graph Neural Networks (GNNs) have demonstrated remarkable capabilities for graph-structured data, their application across various industries has highlighted the need not only for effective models but also for ensuring their security and robustness. This study investigates the vulnerability of GNNs to Graph Backdoor Attacks (GBAs) in node classification tasks. GBAs are a type of adversarial attack that requires embedding triggers in both training and testing datasets. The backdoor behavior implanted by the attacker activates only when the trigger is encountered, allowing the attacker to manipulate the model's predictions. Existing research on GBAs primarily focuses on advancing attack methods to enhance effectiveness and obscure traces. However, we argue that previous scenarios are overly idealized, allowing attackers access to unrealistically comprehensive training data, which does not reflect the real-world context of graph-structured data. To address this, we first explore the impact of various realistic constraints on attackers operating within limited budgets. This provides insights into the critical aspects attackers should focus on in future research, while defenders can use these insights to strengthen security measures. We then propose a novel and straightforward defense method, BAMINANT, which leverages the inherent characteristics of GBAs for detection and defense. BAMINANT identifies and removes malicious backdoor nodes by targeting nodes that exhibit control over neighboring nodes' classification results. Despite its simplicity, this method effectively defends against all existing GBAs, demonstrating both effectiveness and versatility.
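The detection idea described in the abstract, flagging nodes that exert targeted control over their neighbors' predicted labels, can be sketched concretely. The following is a minimal illustrative sketch only, not the thesis's actual BAMINANT implementation: `predict` is a toy mean-aggregation classifier standing in for a trained GNN, and `score_backdoor_suspects` is a hypothetical scoring routine that detaches each node in turn and measures how many neighbor predictions flip and how concentrated those flips are on a single class.

```python
# Illustrative sketch of the abstract's detection idea. All names and the
# scoring rule are assumptions for exposition, not the thesis's method.
import numpy as np

def predict(adj, feats, weight):
    # Toy stand-in for a trained GNN: one round of mean aggregation
    # over neighbors followed by a linear classifier.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    h = (adj @ feats) / deg
    return (h @ weight).argmax(axis=1)

def score_backdoor_suspects(adj, feats, weight):
    # For each node v, delete v (edges and features), re-predict, and
    # measure (a) what fraction of v's neighbors change label and
    # (b) how concentrated the changed neighbors' ORIGINAL labels are
    # on one class -- a backdoor trigger node tends to pull many
    # neighbors toward a single attacker-chosen class.
    base = predict(adj, feats, weight)
    scores = np.zeros(adj.shape[0])
    for v in range(adj.shape[0]):
        nbrs = np.where(adj[v] > 0)[0]
        nbrs = nbrs[nbrs != v]                       # ignore self-loop
        if nbrs.size == 0:
            continue
        adj_v = adj.copy()
        adj_v[v, :] = 0.0
        adj_v[:, v] = 0.0                            # detach v from the graph
        feats_v = feats.copy()
        feats_v[v] = 0.0                             # zero out v's features
        new = predict(adj_v, feats_v, weight)
        flipped = nbrs[new[nbrs] != base[nbrs]]
        if flipped.size == 0:
            continue
        _, counts = np.unique(base[flipped], return_counts=True)
        flip_rate = flipped.size / nbrs.size         # breadth of influence
        concentration = counts.max() / flipped.size  # targetedness
        scores[v] = flip_rate * concentration
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, c = 20, 8, 3
    adj = (rng.random((n, n)) < 0.2).astype(float)
    adj = np.maximum(adj, adj.T)
    np.fill_diagonal(adj, 1.0)                       # add self-loops
    feats = rng.normal(size=(n, d))
    weight = rng.normal(size=(d, c))                 # untrained toy weights
    scores = score_backdoor_suspects(adj, feats, weight)
    print("most suspicious nodes:", np.argsort(-scores)[:3])
```

Under this sketch, high-scoring nodes would be candidates for removal before training or inference; the thesis's actual defense may differ in both the influence measure and the decision rule.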
On-campus access: full text available from 2029-08-26.