| Graduate Student: | 蔡昌銘 Tsai, Chang-Ming |
|---|---|
| Thesis Title: | 自監督動態特徵表示學習及其應用 Self-Supervised Dynamic Embedding and Its Applications |
| Advisor: | 李政德 Li, Cheng-Te |
| Degree: | Master |
| Department: | College of Management - Institute of Data Science |
| Year of Publication: | 2021 |
| Academic Year of Graduation: | 109 |
| Language: | Chinese |
| Pages: | 36 |
| Chinese Keywords: | 推薦系統, 動態嵌入, 自監督學習 |
| English Keywords: | Recommendation System, Dynamic Embedding, Self-Supervised Learning |
Recommendation systems have become increasingly important and can now be found almost everywhere, particularly in social platforms such as Facebook, YouTube, and Twitter, as well as in services like Netflix. Early research on recommendation systems typically learned the relationship between users and items from a bipartite graph, projecting user and item representations into a shared embedding space. However, most of these studies treat user and item embeddings as static and ignore the temporal dimension. In reality, a user's preferences usually change over time, and the relevant attributes of an item also vary with its temporal and spatial context. A model that can dynamically adjust user and item embeddings over time therefore better reflects real-world behavior.
In recent years, some studies have adopted Recurrent Neural Network (RNN) models to learn embeddings, which successfully addresses the lack of temporal modeling in earlier work. However, learning embeddings with an RNN alone may make the model overly sensitive to individual interaction records. In this thesis, we introduce self-supervised learning, which has recently flourished in computer vision and natural language processing, and propose a multi-task self-supervised learning model. By learning dynamic embeddings in a self-supervised manner, the model becomes more robust and far less sensitive to fluctuations in the interaction data.
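To make the two ingredients of the abstract concrete, the following is a minimal PyTorch sketch, not the model proposed in the thesis: a GRU-based update that evolves user and item embeddings after each interaction, plus an in-batch contrastive loss used as a self-supervised auxiliary task. All names here (`DynamicEmbeddingSketch`, `step`, `contrastive_loss`, the noise-based augmentation) are hypothetical illustrations, not identifiers from the thesis.

```python
# Illustrative sketch only: RNN-updated dynamic embeddings + a self-supervised
# contrastive auxiliary loss. Not the thesis's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicEmbeddingSketch(nn.Module):
    def __init__(self, num_users, num_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)   # current dynamic user states
        self.item_emb = nn.Embedding(num_items, dim)   # current dynamic item states
        # GRU cells update each embedding using the counterpart's current state.
        self.user_rnn = nn.GRUCell(dim, dim)
        self.item_rnn = nn.GRUCell(dim, dim)

    def step(self, user_ids, item_ids):
        """Update dynamic embeddings for one batch of (user, item) interactions."""
        u = self.user_emb(user_ids)
        v = self.item_emb(item_ids)
        u_new = self.user_rnn(v, u)   # user state evolves given the interacted item
        v_new = self.item_rnn(u, v)   # item state evolves given the interacting user
        return u_new, v_new

    @staticmethod
    def contrastive_loss(anchor, positive, temperature=0.2):
        """In-batch InfoNCE-style loss serving as the self-supervised auxiliary task."""
        a = F.normalize(anchor, dim=-1)
        p = F.normalize(positive, dim=-1)
        logits = a @ p.t() / temperature      # similarity of every anchor to every positive
        labels = torch.arange(a.size(0))      # matching pairs sit on the diagonal
        return F.cross_entropy(logits, labels)


# Usage: one training step; the recommendation loss is omitted, and the "augmented
# view" here is just small Gaussian noise, chosen purely for illustration.
model = DynamicEmbeddingSketch(num_users=1000, num_items=5000)
users = torch.randint(0, 1000, (32,))
items = torch.randint(0, 5000, (32,))
u_new, v_new = model.step(users, items)
aug = u_new + 0.01 * torch.randn_like(u_new)
ssl_loss = model.contrastive_loss(u_new, aug)
```

The design mirrors, under the stated assumptions, the multi-task idea in the abstract: the GRU updates supply time-varying embeddings, while the contrastive term acts as an auxiliary objective that keeps those embeddings from overreacting to any single interaction record.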