| Graduate Student: | Chiu, Jun-Wei (邱俊維) |
|---|---|
| Thesis Title: | Learning Robust Graph Neural Networks against Adversarial Attacks and Label Scarcity |
| Advisor: | Li, Cheng-Te (李政德) |
| Degree: | Master |
| Department: | Institute of Data Science, College of Management |
| Year of Publication: | 2022 |
| Academic Year: | 110 |
| Language: | English |
| Pages: | 75 |
| Keywords: | Graph neural networks, Adversarial attacks, Label noise, Heterophilous graph, Label scarcity |
Graph neural networks (GNNs) have recently attracted great attention by achieving remarkable performance on node classification with graph-structured data. Although GNNs have been successfully applied in various fields, such as social networks, recommender systems, and citation networks, graphs in certain scenarios contain noise that degrades prediction accuracy.

In this work, we investigate several noise problems on graphs, including adversarial attacks, label noise, and heterophilous graphs, all of which may be accompanied by label scarcity. To address these problems, we propose a novel framework, Holistic Robust Graph Neural Networks (HRGNN), which creates labeled synthetic nodes and links them to existing nodes as their new neighbors. Through the aggregation mechanism of GNNs, the synthetic nodes inject reliable information into existing nodes, purifying their poisoned representations and alleviating the negative effects of the various noises. Furthermore, the edge-filtering strategy in HRGNN removes noisy edges inserted by an attacker, preventing the propagation of incorrect information, while pseudo-labeling provides additional label information to defend against label scarcity and label noise.

To the best of our knowledge, HRGNN is the first model to introduce synthetic nodes into a graph as benign neighbors of existing nodes in order to improve robustness. Empirically, HRGNN outperforms state-of-the-art defense baselines on several real-world datasets, demonstrating its robustness in handling graphs with different types of noise.
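The mechanisms described above can be sketched in a toy NumPy example. This is only an illustration of the general idea, not the thesis's actual design: the cosine-similarity edge filter, the per-class mean features for synthetic nodes, the "link a synthetic node to every existing node" rule, and all function names are assumptions made for this sketch.

```python
import numpy as np

def filter_edges(edges, X, threshold=0.1):
    """Drop edges whose endpoints have low feature cosine similarity,
    a common heuristic for spotting adversarially inserted edges."""
    kept = []
    for u, v in edges:
        denom = np.linalg.norm(X[u]) * np.linalg.norm(X[v])
        sim = X[u] @ X[v] / denom if denom > 0 else 0.0
        if sim >= threshold:
            kept.append((u, v))
    return kept

def add_synthetic_neighbors(edges, X, y, labeled):
    """Create one labeled synthetic node per class (here simply the mean of
    that class's labeled features) and link it to every existing node,
    so each node gains a reliable, benign neighbor."""
    n = X.shape[0]
    syn_feats, syn_edges = [], []
    for i, c in enumerate(np.unique(y[labeled])):
        syn_feats.append(X[labeled & (y == c)].mean(axis=0))
        syn_edges += [(n + i, j) for j in range(n)]
    return edges + syn_edges, np.vstack([X] + syn_feats)

def mean_aggregate(edges, X):
    """One GNN-style mean-aggregation step over self + neighbors; this is
    how reliable synthetic features get mixed into poisoned nodes."""
    H, deg = X.copy(), np.ones(X.shape[0])
    for u, v in edges:
        H[u] += X[v]; H[v] += X[u]
        deg[u] += 1; deg[v] += 1
    return H / deg[:, None]

# Toy graph: two classes, one suspicious cross-class edge (0, 2).
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
y = np.array([0, 0, 1, 1])
labeled = np.array([True, False, True, False])
edges = filter_edges([(0, 1), (2, 3), (0, 2)], X, threshold=0.5)
edges, X = add_synthetic_neighbors(edges, X, y, labeled)
H = mean_aggregate(edges, X)
```

The cross-class edge (0, 2) is filtered out, two labeled synthetic nodes are appended, and each node's representation is pulled toward its class's reliable synthetic features by the aggregation step.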
On-campus access: available from 2027-09-08.