
Graduate Student: 曹何謙 (Tsao, Ho-Chien)
Thesis Title: 基於BERT與增強式學習的多文件閱讀理解模型-建構健康知識領域問答系統
Multi-Document Reading Comprehension Based on BERT and Reinforcement Learning – Building a Health Knowledge Question Answering System
Advisor: 蔣榮先 (Chiang, Jung-Hsien)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Computer Science and Information Engineering
Year of Publication: 2020
Academic Year: 108
Language: English
Pages: 26
Keywords: Question Answering System, Machine Reading Comprehension, Multi-Document Reading Comprehension, Reinforcement Learning, BERT
Hits: 153; Downloads: 2
    Question answering over documents has become a popular research topic in recent years. Machine reading comprehension is a core component of document-based question answering systems; its goal is to locate where the answer appears in documents or passages related to the question. To better reflect the conditions under which machine reading comprehension techniques are integrated into question answering systems, many studies based on multi-document reading comprehension have been proposed. In multi-document reading comprehension, the answer must be found across several documents, rather than within a single passage already known to be relevant to the question.
    A common approach to multi-document reading comprehension splits answer prediction into two steps: first, paragraphs that may contain the answer are selected from several documents; then, the answer is extracted from the selected paragraphs. The problem with this approach is that when the first step selects many wrong paragraphs, the second step cannot extract the answer correctly. In this work, we use reinforcement learning to address this problem. Another difficulty in applying machine reading comprehension models to question answering systems is the lack of manually annotated training data in the application domain. The mismatch between the distribution of the training data and the distribution of data in the application domain degrades model performance. To mitigate this problem, we build two BERT-based models, a paragraph ranking model and an answer extraction model, and use them to construct a question answering system for the health knowledge domain.
    To validate our methods, we conduct experiments on Baidu's DuReader dataset and on a health-knowledge reading comprehension dataset we collected ourselves. The experimental results show that the proposed answer extraction model alleviates the performance degradation in the application domain, and that the reinforcement learning training method improves the performance of the paragraph ranking model.

    Question answering from documents has become a popular research topic in recent years. Machine reading comprehension (MRC) is one of the core parts of document-based question answering systems; its goal is to find the answer in texts related to the question. To simulate the conditions of integrating machine reading comprehension models into question answering systems, many works based on the multi-document reading comprehension setting have been proposed. The task of multi-document reading comprehension is to find answers in a set of documents instead of in a single related paragraph that is known in advance.
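    The reader half of this task is usually framed as span extraction: the model scores every candidate start and end position in a paragraph and returns the best-scoring span. A minimal sketch of that decoding step, with made-up scores (the function name and toy data are illustrative, not the thesis code):

    ```python
    def best_span(start_scores, end_scores, max_len=10):
        """Pick (i, j) with i <= j < i + max_len maximizing start_scores[i] + end_scores[j]."""
        best, best_score = (0, 0), float("-inf")
        for i, s in enumerate(start_scores):
            for j in range(i, min(i + max_len, len(end_scores))):
                if s + end_scores[j] > best_score:
                    best_score = s + end_scores[j]
                    best = (i, j)
        return best

    # Toy example: five tokens with fabricated start/end scores.
    tokens = ["symptoms", "include", "fever", "and", "cough"]
    start = [0.1, 0.0, 2.0, 0.2, 0.5]
    end = [0.0, 0.1, 0.3, 0.2, 1.8]
    i, j = best_span(start, end)
    print(tokens[i:j + 1])  # → ['fever', 'and', 'cough']
    ```

    In a BERT-based reader the two score lists come from the model's start and end logits; the decoding rule above is the same.
    
    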
    A common approach to multi-document reading comprehension is the pipeline approach, which first selects paragraphs that probably contain the answer and then extracts the answer from the selected paragraphs. A problem of the pipeline approach is error propagation: mistakes made in the paragraph selection step make it hard to extract correct answers. We propose a reinforcement learning method to resolve the error propagation problem. Another challenge when applying machine reading comprehension models to question answering systems is the lack of training data in the application domain. The gap between the domain of the training data and the domain of the application prevents machine reading comprehension models from predicting appropriate answers. To reduce the performance degradation of machine reading comprehension models in the application domain, we propose two models for machine reading comprehension: the BERT Ranker and the BERT Reader. Based on them, we build a question answering system for the health knowledge domain.
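    The reinforcement learning idea can be pictured as treating the ranker as a policy: it samples a paragraph, the downstream answer quality supplies the reward, and a REINFORCE-style policy gradient nudges the paragraph scores. A self-contained toy sketch, assuming a 0/1 reward for picking the answer-bearing paragraph (the thesis uses its own reward; all names and data here are illustrative):

    ```python
    import math
    import random

    def softmax(scores):
        """Numerically stable softmax over a list of raw scores."""
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def reinforce_step(scores, gold_idx, lr=0.5):
        """One REINFORCE update: sample a paragraph, reward 1 if it is the gold one.

        The gradient of log pi(sampled) w.r.t. score i is (1[i == sampled] - probs[i]).
        """
        probs = softmax(scores)
        sampled = random.choices(range(len(scores)), weights=probs)[0]
        reward = 1.0 if sampled == gold_idx else 0.0
        return [
            s + lr * reward * ((1.0 if i == sampled else 0.0) - probs[i])
            for i, s in enumerate(scores)
        ]

    random.seed(0)
    scores = [0.0, 0.0, 0.0]  # three candidate paragraphs, initially tied
    for _ in range(200):
        scores = reinforce_step(scores, gold_idx=2)
    probs = softmax(scores)
    print(probs.index(max(probs)))  # → 2: the answer-bearing paragraph dominates
    ```

    The appeal over pure supervised ranking is that the update signal can come from the end answer rather than from paragraph labels alone, which is how the pipeline's two steps get coupled.
    
    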
    To verify our methods, we conduct experiments on the benchmark dataset DuReader and on a health-knowledge machine reading comprehension dataset collected by ourselves. The experimental results show that the BERT Reader alleviates the performance degradation in the application domain and that our reinforcement learning method boosts the performance of the BERT Ranker.
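    Answer quality on datasets like DuReader is typically scored with overlap metrics such as ROUGE-L, which compares a predicted answer to the reference via their longest common subsequence. A minimal token-level sketch (the beta weighting follows the standard ROUGE-L F-score definition; tokenization here is naive whitespace splitting):

    ```python
    def lcs_len(a, b):
        """Dynamic-programming longest-common-subsequence length of two token lists."""
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
        return dp[len(a)][len(b)]

    def rouge_l(candidate, reference, beta=1.2):
        """ROUGE-L F-score over token lists; beta > 1 weights recall over precision."""
        lcs = lcs_len(candidate, reference)
        if lcs == 0:
            return 0.0
        p, r = lcs / len(candidate), lcs / len(reference)
        return (1 + beta ** 2) * p * r / (r + beta ** 2 * p)

    print(rouge_l("flu causes fever".split(), "flu causes fever".split()))  # → 1.0
    ```

    An exact match scores 1.0; partial overlap falls between 0 and 1, which makes the metric usable both for evaluation and as a soft reward signal.
    
    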

    Contents:
    Abstract (Chinese); Abstract; Acknowledgement; Contents; List of Tables; List of Figures
    Chapter 1 Introduction
        1.1 Background
        1.2 Research Objectives
        1.3 Thesis Organization
    Chapter 2 Related Work
        2.1 Document-Based Question Answering
        2.2 Pre-trained Language Models
    Chapter 3 Multi-Document Reading Comprehension
        3.1 Data Preprocessing
            3.1.1 Paragraph Ranking Preprocessing
            3.1.2 Answer Extraction Preprocessing
        3.2 Model Architecture
            3.2.1 BERT Ranker
            3.2.2 BERT Reader
        3.3 Answer Prediction
            3.3.1 Paragraph Selection
            3.3.2 Answer Extraction
        3.4 Model Training
            3.4.1 Supervised Learning
            3.4.2 Reinforcement Learning
    Chapter 4 Health Knowledge Question Answering System
        4.1 Database Building
        4.2 Question Parsing
        4.3 Document Retrieval
    Chapter 5 Experiments
        5.1 Datasets
            5.1.1 Data Collection
        5.2 Evaluation Metrics
        5.3 Implementation
            5.3.1 Baselines
        5.4 Experimental Results
            5.4.1 Multi-Document Reading Comprehension
            5.4.2 Paragraph Ranking
    Chapter 6 Conclusion and Future Work
        6.1 Conclusion
        6.2 Future Work
    References


    On campus: available from 2021-08-01
    Off campus: not available
    The electronic thesis has not been authorized for public release; for the print copy, please consult the library catalog.