| Student: | 楊子萱 Yang, Tzu-Hsuan |
|---|---|
| Thesis Title: | 隱私與準確兼得:對比遺忘學習之可信任推薦系統 Contrastive Unlearning for Privacy-aware Recommender Systems |
| Advisor: | 李政德 Li, Cheng-Te |
| Co-Advisor: | 張欣民 Chang, Hsing-Ming |
| Degree: | 碩士 Master |
| Department: | 管理學院 - 數據科學研究所 Institute of Data Science, College of Management |
| Year of Publication: | 2024 |
| Academic Year of Graduation: | 112 (ROC calendar) |
| Language: | English |
| Number of Pages: | 59 |
| Keywords (Chinese): | 推薦系統遺忘學習、對比式學習、邊遺忘學習、節點遺忘學習、安全和隱私、機器遺忘學習、圖神經網路、連結預測 |
| Keywords (English): | Recommendation Unlearning, Contrastive Learning, Edge Unlearning, Node Unlearning, Security and Privacy, Machine Unlearning, Graph Neural Networks, Link Prediction |
With the rapid development of the Internet, the volume of data generated every day has grown exponentially, and models learn from increasingly large datasets that may include unauthorized personal information. In response, the concept of the "right to be forgotten" plays a key role in the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), granting users the right to decide how their personal information is used. This has given rise to a research field called Machine Unlearning, whose goal is to protect users' rights while preserving the usability of the model.
In this study, we focus on recommendation unlearning and propose a novel framework named Recommendation Contrastive Unlearning (RCU). We integrate contrastive learning into the framework so that the model forgets the designated data while preserving the integrity of the remaining data. By performing inference over the model's hidden layers, we extract implicit information and apply re-ranking techniques to refine the recommendation results so that they better match user preferences, thereby improving the final recommendations.
We conducted experiments on a variety of unlearning tasks and evaluated performance under different proportions of unlearning requests. The results show that, when faced with a large number of unlearning requests, our model not only maintains better recommendation accuracy and stability but also achieves high time efficiency. In addition, we carried out user-targeted and item-targeted unlearning experiments, and the results show that RCU satisfies the diverse needs of both users and platform providers, demonstrating its versatility across different scenarios.
With the rapid development of the internet, the exponential growth of data has led to models learning from increasingly large datasets, which may include unauthorized personal data. In response, the "right to be forgotten" has become pivotal in regulations such as the GDPR and the CCPA, granting users the right to determine how their data are used. Consequently, a burgeoning topic known as Machine Unlearning has emerged, aiming to protect user rights while maintaining model effectiveness.
In this study, we focus on recommendation unlearning and propose a new framework called Recommendation Contrastive Unlearning (RCU). We integrate contrastive learning into the framework to enable the model to forget specified data while preserving the integrity of the remaining data. By leveraging inference on the model's hidden layers, we extract implicit information from item nodes, and through re-ranking techniques we refine the recommendation results to better align with user preferences, thereby optimizing the final recommendations.
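To make the contrastive unlearning idea above concrete, here is a minimal PyTorch sketch under stated assumptions: the function name, tensor layout, temperature, and the specific pairing of a "forget" term with a "retain" term are illustrative choices, not the exact objective used in the thesis.

```python
# Minimal sketch of a contrastive-style unlearning objective for a graph-based
# recommender. All names (function, tensor layout, temperature `tau`) are
# illustrative assumptions, not the thesis's exact formulation.
import torch
import torch.nn.functional as F

def contrastive_unlearning_loss(z, z_orig, forget_edges, retain_edges, tau=0.5):
    """z: fine-tuned user/item embeddings, shape (num_nodes, dim).
    z_orig: frozen embeddings of the original (pre-unlearning) model.
    forget_edges / retain_edges: pairs of index tensors (users, items)."""
    def edge_sim(emb, edges):
        u, i = edges  # endpoint index tensors of user-item interactions
        return F.cosine_similarity(emb[u], emb[i])

    # "Forget" term: penalize residual similarity between the endpoints of
    # interactions that must be unlearned, pushing their embeddings apart.
    forget_term = torch.exp(edge_sim(z, forget_edges) / tau).mean()

    # "Retain" term: keep the similarities of the remaining interactions close
    # to those of the original model, preserving recommendation utility.
    retain_term = F.mse_loss(edge_sim(z, retain_edges),
                             edge_sim(z_orig, retain_edges).detach())

    return forget_term + retain_term
```

Read this way, the forget term drives deleted interactions toward dissimilarity while the retain term anchors the rest of the embedding space to the original model, which is one plausible way to trade off forgetting against accuracy.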
We conduct experiments on various unlearning tasks and evaluate performance under different proportions of unlearning requests. The results demonstrate that our model not only maintains better recommendation accuracy and robustness when faced with a massive influx of unlearning requests, but also ensures excellent time efficiency. Furthermore, user-targeted and item-targeted unlearning experiments show that RCU accommodates the diverse needs of users and platform providers, proving its versatility across different scenarios.
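The method description above also mentions re-ranking driven by implicit information from the hidden layers. A minimal sketch of how such a step could be wired, assuming hidden-layer item embeddings, per-user base scores from the recommender, and a hypothetical blending weight `alpha` (all names are illustrative, not taken from the thesis):

```python
# Minimal re-ranking sketch: blend the recommender's base scores with item
# similarity measured in a hidden embedding space. Names and the blending rule
# are assumptions for illustration only.
import torch
import torch.nn.functional as F

def rerank(base_scores, hidden_item_emb, user_profile_items, alpha=0.3, k=10):
    """base_scores: (num_items,) scores from the recommender for one user.
    hidden_item_emb: (num_items, dim) item embeddings from a hidden layer.
    user_profile_items: indices of items the user has interacted with."""
    # Summarize the user's profile as the mean hidden embedding of their items.
    profile = hidden_item_emb[user_profile_items].mean(dim=0, keepdim=True)

    # Cosine similarity of every candidate item to the profile, shape (num_items,).
    sim = F.cosine_similarity(hidden_item_emb, profile)

    # Blend the two signals and return the indices of the top-k re-ranked items.
    blended = (1 - alpha) * base_scores + alpha * sim
    return torch.topk(blended, k).indices
```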
On-campus access: available to the public from 2029-08-21.