
Author: Lee, I-Wei (李亦薇)
Title: A Method for Evaluating Credibility of User's Posting in Social Network Using Multi-Attention Neural Network (使用多重Attention神經網路做社群網路中發文可信度評估)
Advisor: Liu, Ren-Shiou (劉任修)
Degree: Master
Department: College of Management, Institute of Information Management
Year of publication: 2020
Academic year of graduation: 108
Language: Chinese
Pages: 37
Keywords: Credibility, Social Network, Deep Learning, Attention Mechanism, Stance Detection
Hits: 158 views; 0 downloads
    Social networks are where many people now obtain knowledge and news, yet the information contributed by the public may be mixed with unverified messages. On online platforms, the advantage that everyone has the right to speak can also become a drawback: many false or malicious messages gradually flood the entire social network, so distinguishing true messages from false ones is important. This thesis assesses the credibility of a user's latest post by checking whether it is consistent in stance with the user's past posts and with other users' posts. We combine and extend the attention-based methods of Popat et al. (2018) and Ma et al. (2019) to judge the consistency of a user's posts across multiple topics, treating the user's past posts and other users' posts as evidence for the latest post and evaluating the consistency of their claims; the higher the computed credibility score, the closer the post should be to a real event. Finally, our experimental results show that the proposed method improves on the predictions of standard deep learning models, confirming that consistency-based evaluation helps predict the credibility of posts in social networks.

    Social networks are where many people get knowledge and news nowadays. However, the advantage that everyone can publish information on these platforms may also become a disadvantage. Many unverified messages are gradually flooding social platforms and causing severe impacts. Therefore, distinguishing the authenticity of messages there is important, and it is the problem we address. This paper assesses the credibility of a user's latest post based on the consistency between the user's past posts and other users' posts. We combine and extend the attention-based methods proposed by Popat et al. (2018) and Ma et al. (2019) to determine the coherence of a user's posts on multiple topics, and then evaluate the credibility of the user's latest post. We expect that, after the model is trained, the higher the credibility score, the closer the post is to a real event.
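The core idea in the abstract — attending over evidence posts (a user's past posts and other users' posts) to score a new claim's credibility — can be illustrated with a minimal sketch. This is not the thesis's actual model; it is a simplified dot-product attention layer with a hypothetical linear scoring head, using NumPy instead of a deep-learning framework, with made-up parameter names (`w`, `b`).

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_credibility(claim, evidence, w, b):
    """Score a claim against evidence posts with dot-product attention.

    claim:    (d,) embedding of the latest post
    evidence: (n, d) embeddings of past/other users' posts
    w, b:     parameters of a linear scoring head (illustrative only)
    Returns a credibility score in (0, 1).
    """
    # Attention weights: how relevant each evidence post is to the claim.
    alpha = softmax(evidence @ claim)
    # Evidence summary, weighted by attention.
    context = alpha @ evidence
    # Combine claim and evidence context, squash to (0, 1) with a sigmoid.
    logit = w @ np.concatenate([claim, context]) + b
    return 1.0 / (1.0 + np.exp(-logit))
```

In the thesis, the embeddings and scoring parameters would be learned end to end (e.g., with recurrent encoders and multiple attention modules); here they are fixed inputs purely to show the data flow.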

    Abstract (Chinese)
    Extended Abstract
    Acknowledgments
    Table of Contents
    List of Tables
    List of Figures
    1 Introduction
        1.1 Background and Motivation
        1.2 Research Objectives
        1.3 Contributions
        1.4 Thesis Organization
    2 Related Work
        2.1 Content Credibility
            2.1.1 Information Retrieval
            2.1.2 Textual Semantic Analysis
            2.1.3 Topic Content Analysis
        2.2 User Credibility
            2.2.1 User Behavior and Profile Data
            2.2.2 User Relationships and Interactions
            2.2.3 Multiple Criteria
        2.3 Stance Detection
        2.4 Summary
    3 Methodology
        3.1 Problem Description
        3.2 Model Architecture
        3.3 Method Description
            3.3.1 Post Representation
            3.3.2 Target Attention Model
            3.3.3 Consistency-Based Attention Model
            3.3.4 Overall Model Training
    4 Experiments and Analysis
        4.1 Experimental Framework and Procedure
        4.2 Dataset and Data Processing
        4.3 Experimental Environment and Parameter Settings
        4.4 Experimental Results
            4.4.1 Evaluation Metrics
            4.4.2 Results and Parameter Analysis
    5 Conclusions and Future Work
    References

    Al-Khalifa, H. S. and Al-Eidan, R. M. (2011). An experimental system for measuring the credibility of news content in twitter. International Journal of Web Information Systems, 7(2):130–151.
    Alkhodair, S. A., Ding, S. H., Fung, B. C., and Liu, J. (in press). Detecting breaking news rumors of emerging topics in social media. Information Processing and Management. Retrieved from https://doi.org/10.1016/j.ipm.2019.02.016.
    Allcott, H. and Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31(2):211–236.
    Bond, G. D., Holman, R. D., Eggert, J.-a. L., Speller, L. F., Garcia, O. N., Mejia, S. C., Mcinnes, K. W., Ceniceros, E. C., and Rustige, R. (2017). ‘Lyin’ Ted’, ‘Crooked Hillary’, and ‘Deceptive Donald’: Language of Lies in the 2016 US Presidential Debates. Applied Cognitive Psychology, 31(6):668–677.
    Canini, K. R. and Pirolli, P. L. (2011). Finding Credible Information Sources in Social Networks Based on Content and Social Structure. In 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing, pages 1–8. IEEE.
    Cho, K., van Merriënboer, B., Bahdanau, D., and Bengio, Y. (2014). On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
    Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., and Kuksa, P. (2011). Natural language processing (almost) from scratch. Journal of machine learning research, 12(Aug):2493–2537.
    Dozat, T. and Manning, C. D. (2016). Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734.
    Gao, Y., Li, X., Li, J., Gao, Y., and Yu, P. S. (2019). Info-trust: A multi-criteria and adaptive trustworthiness calculation mechanism for information sources. IEEE Access, 7:13999–14012.
    Khan, J. and Lee, S. (2019). Implicit User Trust Modeling Based on User Attributes and Behavior in Online Social Networks. IEEE Access, 7:142826–142842.
    Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
    Li, Q., Hu, Q., Lu, Y., Yang, Y., and Cheng, J. (2019a). Multi-level word features based on cnn for fake news detection in cultural communication. Personal and Ubiquitous Computing, pages 1–14.
    Li, Q., Zhang, Q., and Si, L. (2019b). Rumor detection by exploiting user credibility information, attention and multi-task learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1173–1179.
    Lim, W. Y., Lee, M. L., and Hsu, W. (2017). iFACT: An Interactive Framework to Assess Claims from Tweets. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 787–796. ACM.
    Liu, G., Chen, Q., Yang, Q., Zhu, B., Wang, H., and Wang, W. (2017). Opinionwalk: An efficient solution to massive trust assessment in online social networks. In IEEE INFOCOM 2017-IEEE Conference on Computer Communications, pages 1–9. IEEE.
    Liu, G., Yang, Q., Wang, H., Lin, X., and Wittie, M. P. (2014). Assessment of multi-hop interpersonal trust in social networks by three-valued subjective logic. In IEEE INFOCOM 2014-IEEE Conference on Computer Communications, pages 1698–1706. IEEE.
    Liu, G., Yang, Q., Wang, H., and Liu, A. X. (2019). Three-valued subjective logic: A model for trust assessment in online social networks. IEEE Transactions on Dependable and Secure Computing.
    Ma, J., Gao, W., Joty, S., and Wong, K.-F. (2019). Sentence-level evidence embedding for claim verification with hierarchical attention networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2561–2571.
    Ma, J., Gao, W., and Wong, K.-F. (2018). Detect rumor and stance jointly by neural multi-task learning. In Companion Proceedings of the The Web Conference 2018, pages 585–593. International World Wide Web Conferences Steering Committee.
    Mukherjee, S. and Weikum, G. (2015). Leveraging joint interactions for credibility analysis in news communities. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 353–362. ACM Press.
    Pennington, J., Socher, R., and Manning, C. (2014). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543.
    Peter, F. (2013). 'Bogus' AP tweet about explosion at the White House wipes billions off US markets. The Telegraph, Finance/Markets. Washington.
    Popat, K., Mukherjee, S., Yates, A., and Weikum, G. (2018). Declare: Debunking fake news and false claims using evidence-aware deep learning. arXiv preprint arXiv:1809.06416.
    Rashkin, H., Choi, E., Jang, J. Y., Volkova, S., and Choi, Y. (2017). Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 2931–2937.
    Ruchansky, N., Seo, S., and Liu, Y. (2017). Csi: A hybrid deep model for fake news detection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 797–806. ACM.
    Sethi, R. J. (2017). Crowdsourcing the verification of fake news and alternative facts. In Proceedings of the 28th ACM Conference on Hypertext and Social Media, pages 315–316. ACM.
    Silverman, C. (2016). This analysis shows how viral fake election news stories outperformed real news on facebook. BuzzFeed News, 16.
    Volkova, S., Shaffer, K., Jang, J. Y., and Hodas, N. (2017). Separating facts from fiction: Linguistic models to classify suspicious and trusted news posts on twitter. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 647–653.
    Wang, W. Y. (2017). "Liar, liar pants on fire": A new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648.
    Young, J. O. (2018). The coherence theory of truth. In Zalta, E. N., editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, fall 2018 edition.
    Zaremba, W., Sutskever, I., and Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
    Zhang, C., Gupta, A., Kauten, C., Deokar, A. V., and Qin, X. (2019). Detecting fake news for reducing misinformation risks using analytics approaches. European Journal of Operational Research, 279(3):1036–1052.
    Zhao, L., Hua, T., Lu, C.-T., and Chen, R. (2016). A topic-focused trust model for twitter. Computer Communications, 76:1–11.
    Zhou, X. and Zafarani, R. (2018). Fake News: A Survey of Research, Detection Methods, and Opportunities. arXiv preprint arXiv:1812.00315.

    On campus: available from 2025-07-01
    Off campus: not available
    The electronic thesis has not been authorized for public release; for the print copy, please consult the library catalog.