
Graduate Student: 邱怡瑄 (Chiou, Yi-Hsuan)
Thesis Title: 結合機率矩陣解法以及深度學習模型應用在推薦系統
(Combining Probabilistic Matrix Factorization with Deep Learning Networks in Recommender System)
Advisor: 劉任修 (Liu, Ren-Shiou)
Degree: Master
Department: College of Management - Institute of Information Management
Year of Publication: 2019
Academic Year of Graduation: 107
Language: Chinese
Pages: 60
Chinese Keywords: 深度學習、GRU、矩陣分解、推薦系統
English Keywords: Deep Learning, Gated Recurrent Unit, Attention, Matrix Factorization, Recommender System
  • In recent years, with the explosive growth of big data and the rapid advance of information technology, many online businesses and e-commerce stores recommend suitable products to consumers based on their purchase records and preferences. Finding the items one actually needs among a vast catalog has therefore become especially important, and recommender systems gradually took shape in this environment. Personalized recommender systems fall into three main categories. The first is content-based recommendation: the system learns a specific user's preferences from that user's ratings and profile and recommends highly similar items, and the more browsing history the user has, the better the recommendations. The second is collaborative filtering recommendation: the system exploits associations between users or between items, on the premise that users who like similar things should receive similar recommendations; an advantage of this approach is that it can operate without users' rating records. The third is the hybrid recommender system, which combines matrix factorization with a text-extraction model. This thesis adopts the third, hybrid approach, which can predict users' ratings of products more accurately.

    This thesis therefore proposes a model that combines a hierarchical deep learning network with Probabilistic Matrix Factorization. We first extract a dataset of user reviews and feed it into our deep learning network, in which an attention mechanism assigns a weight to every word, focusing on the important words in each review; the resulting representation is then trained alternately with the latent factor vectors of matrix factorization. RMSE is the experimental metric of this thesis, and the proposed model clearly outperforms the methods compared in Chapter 4. Taking the average over five random seeds as the performance indicator, our model performs 8% better than Probabilistic Matrix Factorization, 5% better than Convolutional Matrix Factorization for Document Context-Aware Recommendation, and 4% better than Dual-Regularized Matrix Factorization with Deep Neural Networks, and can thus predict users' ratings of products more precisely.
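    As a rough illustration of the word-level attention weighting described above, the following NumPy sketch scores each word's hidden state against a context vector and normalizes the scores with a softmax. The hidden states, the context vector, and all dimensions are made-up stand-ins, not values from the thesis.

    ```python
    import numpy as np

    # Hypothetical inputs: 4 word-level GRU hidden states of dimension 5 and a
    # word-level context vector u_w. All numbers are illustrative only.
    rng = np.random.default_rng(0)
    H = rng.normal(size=(4, 5))    # h_t for each word in one review
    u_w = rng.normal(size=5)       # learned word-level context vector

    # Score each word against the context vector, then normalize with softmax
    scores = H @ u_w
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()           # attention weights, sum to 1

    # Review representation: attention-weighted sum of the hidden states
    s = alpha @ H
    ```

    The weighted sum `s` is the kind of fixed-length review representation that can then be trained jointly with the matrix-factorization latent factors.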

    Before a recommender system can generate recommendations, useful information must be filtered out of the big data, and that information may be explicit or implicit. Traditional matrix factorization usually considers only the accuracy of the first few results, so predictions that account for the global covariance are often inaccurate. Studies have shown that Probabilistic Matrix Factorization can produce good recommendations, but its disadvantage is that the model considers only the product ratings. Other studies have also found that the recommendations produced by Probabilistic Matrix Factorization are not well explained. For example, Blei et al. (2003) proposed Latent Dirichlet Allocation to generate topic models, which ignore the context of each word. Wang and Blei (2011) therefore proposed Collaborative Topic Modeling for recommending scientific articles, combining the Probabilistic Matrix Factorization model with the Latent Dirichlet Allocation model to generate user recommendations; its effect is significantly better than the original matrix factorization method, with more accurate predictions.
    To clarify users' preferences, methods based on sentiment analysis of text perform well and can extract product features and sentiment phrases from reviews. Natural language processing has been researched extensively since the millennium, and deep learning networks perform excellently in text mining and topic analysis. However, most text sentiment analysis methods cannot rank which features of a product a user likes best or least. This thesis therefore combines deep-learning-based sentiment analysis with Probabilistic Matrix Factorization to uncover users' true preferences.
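    As a minimal, self-contained sketch of the Probabilistic Matrix Factorization baseline discussed above (Salakhutdinov and Mnih, 2007), the following fits user and item latent factors by gradient descent on the regularized squared error over observed ratings. The toy rating matrix, latent dimension, learning rate, and regularization weight are all illustrative assumptions, not the settings used in the thesis.

    ```python
    import numpy as np

    # Toy 3x3 rating matrix with 0 marking unobserved entries
    R = np.array([[5., 3., 0.],
                  [4., 0., 1.],
                  [0., 2., 5.]])
    mask = R > 0

    rng = np.random.default_rng(42)
    k, lam, lr = 2, 0.1, 0.01                # latent dim, L2 weight, step size
    U = rng.normal(scale=0.1, size=(3, k))   # user latent factors
    V = rng.normal(scale=0.1, size=(3, k))   # item latent factors

    # Gradient descent on the MAP objective: squared error + L2 priors
    for _ in range(2000):
        E = mask * (R - U @ V.T)             # residuals on observed entries only
        U += lr * (E @ V - lam * U)
        V += lr * (E.T @ U - lam * V)

    # RMSE over the observed entries
    E = mask * (R - U @ V.T)
    rmse = np.sqrt((E ** 2).sum() / mask.sum())
    ```

    RMSE here is computed on the training entries purely for illustration; the thesis reports RMSE on held-out ratings when comparing methods.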

    Abstract
    Extended Abstract
    Acknowledgements
    Table of Contents
    List of Tables
    List of Figures
    1 Introduction
      1.1 Background
      1.2 Research Objectives
      1.3 Contributions
      1.4 Thesis Organization
    2 Literature Review
      2.1 Content-Based Recommender Systems
      2.2 Collaborative Filtering Recommender Systems
      2.3 Hybrid Recommender Systems
        2.3.1 Matrix Factorization Combined with LDA
        2.3.2 Matrix Factorization Combined with Denoising Models
        2.3.3 Matrix Factorization Combined with Deep Learning Networks
          2.3.3.1 Matrix Factorization with Generative Adversarial Networks
          2.3.3.2 Matrix Factorization with Convolutional Neural Networks
          2.3.3.3 Matrix Factorization with Dual-Regularized Networks
      2.4 Attention Extraction Methods
        2.4.1 LSTM Method
        2.4.2 Linear Method
      2.5 Summary
    3 Methodology
      3.1 Method Architecture
      3.2 Probabilistic Matrix Factorization Architecture
      3.3 Hierarchical Attention Network Architecture
        3.3.1 Bidirectional GRU Gating Mechanism
        3.3.2 Word Embedding Extraction
        3.3.3 Word Encoder
        3.3.4 Word Attention Layer
        3.3.5 Sentence Encoder
        3.3.6 Sentence Attention Layer
        3.3.7 Projection (Dimension-Reduction) Layer
        3.3.8 Aspect Sentiment Level Output Layer
      3.4 Joint Training of Probabilistic Matrix Factorization and the Hierarchical Deep Learning Network
    4 Experiments and Analysis
      4.1 Experimental Procedure
      4.2 Dataset Overview
      4.3 Environment and Parameter Settings
      4.4 Evaluation Metrics
      4.5 Performance Comparison
    5 Conclusions and Future Work
    References

    Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv e-prints, abs/1409.0473.
    Blei, D. M., Ng, A. Y., and Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.
    Bouvrie, J. (2006). Notes on convolutional neural networks.
    Cho, K., van Merriënboer, B., Gülçehre, Ç., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics.
    Dong, X., Yu, L., Wu, Z., Sun, Y., Yuan, L., and Zhang, F. (2017). A hybrid collaborative filtering model with deep structure for recommender systems. In AAAI.
    Kim, D., Park, C., Oh, J., Lee, S., and Yu, H. (2016). Convolutional matrix factorization for document context-aware recommendation. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys '16, pages 233-240, New York, NY, USA. ACM.
    Koren, Y., Bell, R., and Volinsky, C. (2009). Matrix factorization techniques for recommender systems. Computer, 42(8):30-37.
    Liang, D., Krishnan, R. G., Hoffman, M. D., and Jebara, T. (2018). Variational autoencoders for collaborative filtering. In Proceedings of the 2018 World Wide Web Conference, WWW '18, pages 689-698, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee.
    McAuley, J. and Leskovec, J. (2013). Hidden factors and hidden topics: understanding rating dimensions with review text. In Proceedings of the 7th ACM Conference on Recommender Systems, RecSys '13, pages 165-172, New York, NY, USA. ACM.
    Ning, X. and af, L. Y. (2018). Rating prediction via generative convolutional neural networks based regression. Pattern Recognition Letters.
    Qian, N. (1999). On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145-151.
    Resnick, P., Iacovou, N., Suchak, M., Bergstrom, P., and Riedl, J. (1994). GroupLens: an open architecture for collaborative filtering of netnews. In Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, CSCW '94, pages 175-186, New York, NY, USA. ACM.
    Salakhutdinov, R. and Mnih, A. (2007). Probabilistic matrix factorization. In Proceedings of the 20th International Conference on Neural Information Processing Systems, NIPS '07, pages 1257-1264, USA. Curran Associates Inc.
    Sarwar, B., Karypis, G., Konstan, J., and Riedl, J. (2001). Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on World Wide Web, WWW '01, pages 285-295, New York, NY, USA. ACM.
    Schafer, J. B., Frankowski, D., Herlocker, J., and Sen, S. (2007). Collaborative Filtering Recommender Systems, pages 291-324. Springer Berlin Heidelberg, Berlin, Heidelberg.
    Sukhbaatar, S., Szlam, A., Weston, J., and Fergus, R. (2015). Weakly supervised memory networks. CoRR, abs/1503.08895.
    Tang, D., Qin, B., and Liu, T. (2016). Aspect level sentiment classification with deep memory network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 214-224, Austin, Texas. Association for Computational Linguistics.
    Wang, C. and Blei, D. M. (2011). Collaborative topic modeling for recommending scientific articles. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '11, pages 448-456, New York, NY, USA. ACM.
    Wang, H., Shi, X., and Yeung, D. (2016a). Collaborative recurrent autoencoder: Recommend while learning to fill in the blanks. CoRR, abs/1611.00454.
    Wang, H., Wang, N., and Yeung, D. (2014). Collaborative deep learning for recommender systems. CoRR, abs/1409.2944.
    Wang, Y., Huang, M., Zhu, X., and Zhao, L. (2016b). Attention-based LSTM for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 606-615, Austin, Texas. Association for Computational Linguistics.
    Wu, C., Wang, J., Liu, J., and Liu, W. (2016). Recurrent neural network-based recommendation for time heterogeneous feedback. Knowledge-Based Systems, 109:90-103.
    Wu, H., Zhang, Z., Yue, K., Zhang, B., He, J., and Sun, L. (2018). Dual-regularized matrix factorization with deep neural networks for recommender systems. Knowledge-Based Systems, 145:46-58.
    Yang, Z., Yang, D., Dyer, C., He, X., Smola, A. J., and Hovy, E. H. (2016). Hierarchical attention networks for document classification. In HLT-NAACL.
    Zhang, L., Luo, T., Zhang, F., and Wu, Y. (2018). A recommendation model based on deep neural network. IEEE Access, 6:9454-9463.

    Full text available: on campus 2020-10-10; off campus 2020-10-10