| Author: | 葉家任 Yeh, Chia-Jen |
|---|---|
| Thesis Title: | TVaR: Time and Variation-Aware Mechanism for Enhanced Sequential Recommendation |
| Advisor: | 高宏宇 Kao, Hung-Yu |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering |
| Year of Publication: | 2023 |
| Academic Year: | 111 |
| Language: | English |
| Pages: | 42 |
| Keywords: | Sequential Recommendation, Variational Autoencoder, Bidirectional Transformer |
In the ever-changing landscape of user preferences, sequential recommendation systems have proven to play a critical role. These systems leverage a user's past interactions with a platform or product to deliver real-time, personalized recommendations. To meet this evolving need, we propose a new mechanism: the Time and Variation-Aware Mechanism for Enhanced Sequential Recommendation (TVaR). TVaR is elegantly designed yet simple in its premise: it refines recommendations solely by analyzing users' transaction histories. This makes it straightforward to implement and gives it high practicality and scalability. To the best of our knowledge, we are the first to employ a Variational Autoencoder (VAE) to identify latent factors in items and to combine these factors with timestamp embeddings. This combination is significant because it seamlessly integrates the temporal and latent dimensions inherent in the data, yielding a more comprehensive understanding of user behavior and preferences. To further improve performance, we also apply data augmentation techniques to strengthen the bidirectional Transformer model. Empirical results strongly support TVaR's superiority: when the VAE-generated embeddings and time embeddings are carefully integrated with the bidirectional Transformer, performance improves markedly, setting a new and higher benchmark for the field. TVaR not only performs well on a single test dataset but also consistently outperforms current state-of-the-art recommendation methods, demonstrating its generality and potential for long-term application. Overall, through its elegant design and high practicality, TVaR brings new possibilities and perspectives to sequential recommendation. It addresses several core problems: how to capture user preferences more accurately, how to effectively integrate temporal and latent information, and how to improve model performance through data augmentation. TVaR therefore constitutes a framework worthy of further study and application.
In the dynamic realm of user preferences, sequential recommendation systems have proven essential, leveraging past interactions to provide real-time, personalized suggestions. In response to this evolving need, we present the Time and Variation-Aware Mechanism for Enhanced Sequential Recommendation (TVaR). This mechanism, elegantly designed yet simple in its premise, refines recommendations solely by capitalizing on user transaction histories. To our knowledge, we are the first to deploy a Variational Autoencoder (VAE) to discern latent factors from items and subsequently combine these with timestamp embeddings. This approach seamlessly captures both the temporal and latent dimensions inherent in the data. To further bolster its efficacy, data augmentation techniques are employed, strengthening the bidirectional Transformer model's performance. Empirical evidence supports TVaR's effectiveness: when VAE embeddings and time embeddings are paired with the bidirectional Transformer model, a notable enhancement in performance is observed, setting a new benchmark in the domain. Significantly, TVaR consistently surpasses prevailing state-of-the-art methodologies.
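The pipeline the abstract describes, VAE-derived latent factors for items fused with timestamp embeddings and fed to a bidirectional (unmasked) self-attention layer, can be sketched minimally as follows. This is an illustrative sketch only, not the thesis's implementation: all dimensions, the random "encoder" outputs, the sinusoidal time encoding, and the additive fusion are assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- not taken from the thesis.
n_items, d = 100, 8          # item vocabulary size, embedding dimension
seq = [3, 17, 42, 7]         # one user's interaction history (item ids)
timestamps = [0, 5, 9, 20]   # interaction times, arbitrary units

# 1) VAE encoder output: per-item mean and log-variance of a latent factor.
#    Here these are random stand-ins for a trained encoder's output.
mu = rng.normal(size=(n_items, d))
logvar = rng.normal(scale=0.1, size=(n_items, d))

# Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
eps = rng.normal(size=(n_items, d))
z = mu + np.exp(0.5 * logvar) * eps

# 2) Sinusoidal timestamp embeddings (Transformer-style, over raw time values).
def time_embedding(t, d):
    freqs = 1.0 / (10000 ** (2 * np.arange(d // 2) / d))
    angles = np.outer(t, freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

item_emb = rng.normal(size=(n_items, d))          # ordinary learned item table
t_emb = time_embedding(np.array(timestamps), d)   # shape (seq_len, d)

# 3) Fuse item embedding + VAE latent + time embedding into the model input.
x = item_emb[seq] + z[seq] + t_emb                # shape (seq_len, d)

# 4) One bidirectional (no causal mask) self-attention layer over the sequence.
scores = x @ x.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out = attn @ x

print(out.shape)  # (4, 8): one contextualized vector per interaction
```

In a full model the contextualized vectors would feed further Transformer layers and a softmax over items to score the next interaction; the sketch stops at the fusion and attention steps that the abstract highlights.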