
Graduate student: Cheng, Jia-Hao (程家浩)
Thesis title: Incorporating Semantic Role Label into BART for Grammatical Error Correction (整合語意標籤與BART模型於文法錯誤之修正)
Advisor: Wang, Hei-Chia (王惠嘉)
Degree: Master
Department: Institute of Information Management, College of Management
Year of publication: 2021
Academic year of graduation: 109 (2020-2021)
Language: Chinese
Number of pages: 76
Keywords: Writing Support System, Grammatical Error Correction, Pre-trained Language Model, Semantic Role Labeling
    Writing is an essential tool for communicating, exchanging ideas, and conveying knowledge. When an article frequently contains grammatical errors in spelling, syntax, or tense, readers can easily misread or misunderstand it, so revising the text is an indispensable process. For writers, however, this means repeatedly checking the text for major flaws; for reviewers, manual correction is prone to oversights and omissions. Automated grammatical error correction (GEC) systems have therefore been proposed. Yet, according to the prior literature, current GEC systems lack sufficient understanding of the text, which limits their ability to correct content word errors; furthermore, the scarcity of large-scale annotated data in the GEC field makes it difficult for GEC methods to achieve better performance.

    In recent years, pre-trained language models have excelled across NLP tasks, because they produce context-aware representations and because transfer learning allows low-resource tasks to benefit from them. This study therefore proposes SE-GEC (Semantic-Enhanced Grammatical Error Correction), a method built on the pre-trained language model BART (Bidirectional and Auto-Regressive Transformers), and applies data augmentation techniques to increase the amount of available parallel data. To further strengthen semantic understanding, this study additionally incorporates semantic role labels (SRL) into the attention mechanism, so that the model better grasps contextual semantics, improving its overall correction ability and addressing its weakness in correcting content word errors.
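    At its core, this formulation treats error correction as ordinary sequence-to-sequence fine-tuning: the erroneous sentence is the encoder input and the corrected sentence is the decoder target. The minimal sketch below illustrates that setup with the Hugging Face transformers library; the checkpoint name, learning rate, and single training step are illustrative assumptions, not the configuration used in this thesis.

    # Minimal sketch: GEC as seq2seq fine-tuning of BART (illustrative settings only).
    import torch
    from transformers import BartForConditionalGeneration, BartTokenizer

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

    source = "She go to school yesterday ."    # ungrammatical input
    target = "She went to school yesterday ."  # corrected reference

    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids

    # One training step: the decoder learns to generate the corrected sentence.
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()

    # Inference with beam search over the fine-tuned model.
    generated = model.generate(inputs.input_ids, num_beams=5, max_length=64)
    print(tokenizer.decode(generated[0], skip_special_tokens=True))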

    Five experiments were designed to verify the effectiveness of the model. The results show that every data augmentation method used in this study improves model performance, and that combining different methods yields a greater variety of synthetic error types. Moreover, adding SRL tags strengthens the model's semantic understanding, allowing content word errors that carry substantive meaning to be corrected, which confirms the value of the added semantic information. On the CoNLL-2014 test set, SE-GEC also surpasses previous GEC models on automatic evaluation metrics, demonstrating the soundness of its overall architecture.
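    One simple way to picture the sentence-transformation style of augmentation reported above is to inject artificial errors into clean sentences, producing (noisy source, clean target) training pairs. The corruption rules and probabilities in the sketch below are assumptions chosen only for illustration; they are not the exact rules used by the thesis' augmentation module.

    # Illustrative synthetic-error generation for data augmentation (assumed rules).
    import random

    CONFUSION = {"a": "the", "the": "a", "in": "on", "on": "in", "is": "are", "are": "is"}

    def corrupt(sentence, p=0.15, seed=None):
        rng = random.Random(seed)
        noisy = []
        for tok in sentence.split():
            r = rng.random()
            if r < p and tok.lower() in CONFUSION:
                noisy.append(CONFUSION[tok.lower()])   # substitute a function word
            elif r < p * 1.5:
                continue                               # drop the token
            else:
                noisy.append(tok)
        if len(noisy) > 2 and rng.random() < p:        # occasionally swap neighbours
            i = rng.randrange(len(noisy) - 1)
            noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]
        return " ".join(noisy)

    clean = "The children are playing in the park ."
    print((corrupt(clean, seed=0), clean))             # (synthetic source, clean target)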

    Writing is an important medium for humans to communicate, exchange ideas, and convey knowledge. When an article contains many lexical, semantic, or syntactic errors, it becomes hard for readers to comprehend it effectively and clearly. As a result, revising the manuscript is indispensable for a writer. However, writers' ability to find and revise grammatical errors is limited by their current level of language proficiency; on the other hand, reviewers may overlook writers' grammatical errors when correcting manually. Because of these difficulties, a grammatical error correction (GEC) system is increasingly necessary. Nevertheless, according to the previous literature, current GEC systems lack a deeper understanding of the text and therefore still struggle to correct content word errors. In addition, training data are so scarce that GEC systems are limited in achieving better performance.

    Recently, contextual pre-trained language models have proven effective across a wide range of natural language processing (NLP) tasks. One reason is that pre-trained language models build rich, context-aware representations of text; meanwhile, transfer learning with a pre-trained language model enables even low-resource NLP tasks to benefit. Therefore, we introduce semantic-enhanced grammatical error correction (SE-GEC), which is based on the pre-trained language model BART (bidirectional and auto-regressive transformers). Beyond BART, we not only employ three data augmentation methods to expand the available labeled data, but also incorporate semantic role labels (SRL) so that the model absorbs explicit contextual semantic information via attention.
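    The sketch below shows one way SRL tags can enter the model through attention: each token's predicate-argument role is embedded and added as a bias to the usual scaled dot-product scores. The tag inventory, dimensions, and additive form are assumptions made for illustration; the exact SRL attention mechanism is defined in the thesis' method chapter.

    # Illustrative SRL-biased attention layer (assumed form, not the thesis' exact design).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SRLBiasedAttention(nn.Module):
        def __init__(self, d_model, num_roles):
            super().__init__()
            self.q = nn.Linear(d_model, d_model)
            self.k = nn.Linear(d_model, d_model)
            self.v = nn.Linear(d_model, d_model)
            self.role_emb = nn.Embedding(num_roles, d_model)  # e.g. ARG0, ARG1, V, O ...

        def forward(self, hidden, role_ids):
            # hidden: (batch, seq, d_model); role_ids: (batch, seq) SRL tag indices
            q, k, v = self.q(hidden), self.k(hidden), self.v(hidden)
            scale = hidden.size(-1) ** 0.5
            scores = q @ k.transpose(-1, -2) / scale
            bias = q @ self.role_emb(role_ids).transpose(-1, -2) / scale
            attn = F.softmax(scores + bias, dim=-1)           # semantics-aware weights
            return attn @ v

    x = torch.randn(1, 6, 16)                   # toy hidden states
    roles = torch.tensor([[1, 2, 3, 0, 0, 0]])  # toy per-token SRL tag ids
    print(SRLBiasedAttention(16, 8)(x, roles).shape)  # torch.Size([1, 6, 16])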

    SE-GEC is evaluated on the CoNLL-2014 benchmark. The results show that SE-GEC improves the correction of content word errors as well as of overall error types, indicating that the three data augmentation methods provide effective synthetic data and that SE-GEC is able to capture external knowledge from SRL attention.
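    The CoNLL-2014 benchmark is scored with F0.5, which weights precision twice as heavily as recall. The sketch below applies only the final formula to already-counted edits; the actual MaxMatch (M2) scorer also performs the edit extraction and alignment step, which is omitted here.

    # F0.5 from edit counts (the full M2 scorer additionally aligns edits).
    def f_beta(tp, fp, fn, beta=0.5):
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision == 0.0 and recall == 0.0:
            return 0.0
        b2 = beta ** 2
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    # Toy counts: 60 correct edits, 20 spurious edits, 40 missed edits.
    print(round(f_beta(60, 20, 40), 4))  # 0.7143, a precision-weighted score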

    Chapter 1 Introduction 1
      1.1 Research Background and Motivation 1
      1.2 Objectives 8
      1.3 Research Scope and Limitations 9
      1.4 Research Process 9
      1.5 Thesis Outline 10
    Chapter 2 Literature Review 12
      2.1 Pre-trained Language Models 12
        2.1.1 ELMo (Embeddings from Language Models) 13
        2.1.2 BERT (Bidirectional Encoder Representations from Transformers) 14
          2.1.2.1 Pre-training BERT 16
          2.1.2.2 Fine-tuning BERT 16
        2.1.3 BART (Bidirectional and Auto-Regressive Transformers) 16
      2.2 Semantic Role Labeling (SRL) 18
        2.2.1 Span-based SRL 19
        2.2.2 Dependency-based SRL 20
      2.3 Grammatical Error Correction 21
        2.3.1 Statistical Machine Translation (SMT) 22
        2.3.2 Neural Machine Translation (NMT) 23
      2.4 Summary 26
    Chapter 3 Research Method 27
      3.1 Research Architecture 27
      3.2 Data Preprocessing Module 29
      3.3 Data Augmentation Module 32
        3.3.1 Extraction from Wikipedia Revision Histories 32
        3.3.2 Back-translation 34
        3.3.3 Sentence Transformation 35
      3.4 Semantic Role Labeling Module 36
      3.5 Grammatical Error Correction Module 38
        3.5.1 Grammatically Correct Sentence Generation Stage 39
          3.5.1.1 Sentence Generation 40
          3.5.1.2 SRL Attention Mechanism 42
          3.5.1.3 Copying Mechanism 43
        3.5.2 Model Ensembling 44
      3.6 Summary 44
    Chapter 4 System Implementation and Evaluation 46
      4.1 System Environment 46
      4.2 Experimental Method 46
        4.2.1 Data Sources 46
          4.2.1.1 Data for Augmentation 46
          4.2.1.2 Training Data 47
          4.2.1.3 Test Data 49
        4.2.2 Experimental Design 49
        4.2.3 Evaluation Metrics 50
      4.3 Parameter Settings 52
        4.3.1 Parameter 1: Input Sentence Length 52
        4.3.2 Parameter 2: Training Parameters of the GEC Module 53
      4.4 Experimental Results and Analysis 53
        4.4.1 Experiment 1: Effects of Different Data Augmentation Methods 53
        4.4.2 Experiment 2: Benefit of Adding SRL for Correcting Content Word Errors 57
        4.4.3 Experiment 3: Impact of SRL and the Copying Mechanism on Overall Correction 59
        4.4.4 Experiment 4: Comparison with Other GEC Models on Automatic Metrics 60
        4.4.5 Experiment 5: Case Study, Comparison with Existing Grammar Checking Tools 62
    Chapter 5 Conclusions and Future Work 64
      5.1 Research Results 64
      5.2 Future Research Directions 66
    References 67
    Appendix 75


    Full-text availability: on campus from 2023-08-01; off campus from 2023-08-01.