| Graduate Student: | 普皓群 (Pu, Hao-Chun) |
|---|---|
| Thesis Title: | 基於深度學習之心智圖自動產生方法與技術研發:以數位閱讀與寫作能力培養之應用為例 (On Deep Learning-Based Method and Technology for Automatic Mind Map Generation: Development of Digital Reading and Writing Ability as an Example) |
| Advisor: | 陳裕民 (Chen, Yuh-Min) |
| Co-advisor: | 朱慧娟 (Chu, Hui-Chuan) |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science, Institute of Manufacturing Information and Systems |
| Publication Year: | 2021 |
| Graduation Academic Year: | 109 |
| Language: | Chinese |
| Pages: | 92 |
| Chinese Keywords: | machine learning, deep learning, mind map, mind map generation, digital learning, digital reading and writing |
| English Keywords: | Machine learning, Deep learning, Mind map, Mind map generation |
A mind map is a diagram for organizing concepts. Mind maps are widely used in language teaching, and multiple studies have shown that applying them in instruction improves students' language abilities across several dimensions. However, doing so requires preparing a mind map for a large amount of teaching content, which places a considerable time burden on teachers.

In view of this need, this study applies deep learning techniques to design and develop a "Deep Learning-Based Mind Map Generation Method" for automatic mind map generation, consisting of four steps: Named Entity Recognition, Entity Relationship Extraction, Key Entity Extraction, and Structural Visualization. Three deep learning models are designed to solve the classification problems arising in the Named Entity Recognition, Entity Relationship Extraction, and Key Entity Extraction steps. Finally, a Structural Visualization algorithm integrates the models' predictions and visualizes the data as a digital mind map.
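To make the four-step flow concrete, here is a minimal Python sketch of how such a pipeline could be wired together. It is not the thesis's implementation: the NER checkpoint (`ckiplab/bert-base-chinese-ner`), the stubbed relation and key-entity classifiers, and the sample sentence are all assumptions for illustration.

```python
# Illustrative sketch of the four-step pipeline; the thesis's own fine-tuned
# models are replaced here by a public NER checkpoint and two simple stubs.
from transformers import pipeline

# Step 1: Named Entity Recognition with a public Chinese BERT
# token-classification model (an assumed stand-in for the thesis model).
ner = pipeline("token-classification",
               model="ckiplab/bert-base-chinese-ner",
               aggregation_strategy="simple")

def extract_relations(entities):
    """Step 2 (stub): a fine-tuned classifier would decide, per entity
    pair, whether a relation holds; here consecutive entities are linked."""
    return [(a["word"], b["word"]) for a, b in zip(entities, entities[1:])]

def select_key_entity(entities):
    """Step 3 (stub): a fine-tuned classifier would score each entity;
    here the first entity is taken as the central (root) node."""
    return entities[0]["word"]

def visualize(node, relations, indent=0, seen=None):
    """Step 4: render the predicted structure as an indented tree,
    a text stand-in for drawing the digital mind map."""
    seen = seen or set()
    print("  " * indent + node)
    seen.add(node)
    for head, tail in relations:
        if head == node and tail not in seen:
            visualize(tail, relations, indent + 1, seen)

text = "小明在圖書館閱讀繪本,之後到公園寫作文。"
entities = ner(text)
visualize(select_key_entity(entities), extract_relations(entities))
```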
To evaluate the effectiveness of the Deep Learning-Based Mind Map Generation Method, this study designs a deep learning model evaluation process and a generation method evaluation process. The model evaluation compares three BERT pre-trained models on the Named Entity Recognition, Entity Relationship Extraction, and Key Entity Extraction tasks, and the best-performing models are carried into the method evaluation. The method evaluation covers two scenarios, manually annotated mind maps and automatically generated mind maps: the data for the Named Entity Recognition, Entity Relationship Extraction, and Key Entity Extraction steps is produced either by human annotators or by the deep learning models, and is then passed through the Structural Visualization step to draw the mind maps. Finally, the manually annotated and automatically generated mind maps are mixed together and rated by human judges. The results show that with manually annotated data the method produces mind maps of sufficient quality, demonstrating its effectiveness. The quality of the mind maps generated by the deep learning models is less stable: a small portion match the quality of the manually annotated maps, but most fall short.
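The abstract does not name the three BERT pre-trained models, so the sketch below uses common Chinese BERT checkpoints (`bert-base-chinese`, `hfl/chinese-bert-wwm-ext`, `hfl/chinese-roberta-wwm-ext`) as assumed stand-ins, with `seqeval` as an assumed choice for entity-level scoring. It illustrates the comparison idea only: the same token-classification head is trained on each candidate encoder, and held-out F1 picks the winner.

```python
# Sketch of the model-comparison step; checkpoints and metric library are
# assumptions, not the thesis's documented choices.
from transformers import AutoModelForTokenClassification, AutoTokenizer
from seqeval.metrics import f1_score

CANDIDATES = [
    "bert-base-chinese",            # Google's original Chinese BERT
    "hfl/chinese-bert-wwm-ext",     # whole-word-masking variant
    "hfl/chinese-roberta-wwm-ext",  # RoBERTa-style variant
]

def build(checkpoint, num_labels):
    """The same classification head sits on top of each candidate encoder."""
    tok = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForTokenClassification.from_pretrained(
        checkpoint, num_labels=num_labels)
    return tok, model  # fine-tune with the usual Trainer / training loop

# After fine-tuning, entity-level F1 on a held-out set picks the winner;
# seqeval scores whole entities rather than individual tokens.
gold = [["B-PER", "O", "B-LOC", "I-LOC", "O"]]
pred = [["B-PER", "O", "B-LOC", "O", "O"]]
print(f1_score(gold, pred))  # 0.5: the PER entity matches, the LOC span does not
```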
To verify the effectiveness of digital mind maps in teaching, this study further designs a mind map-based model for developing digital reading and writing abilities and uses it to build a digital reading and writing learning platform. Experiments show that the platform improves students' concentration while reading and the richness of their compositions.