| Graduate Student: | 譚宇翔 Tan, Yu-Hsiang |
|---|---|
| Thesis Title: | 提升法律自然語言處理的多語言一致性與事實準確性:知識圖譜擴增檢索方法 Knowledge Graph-Augmented Retrieval for Enhancing Multilingual Consistency and Factual Accuracy in Legal Natural Language Processing |
| Advisor: | 李韶曼 Lee, Shao-Man |
| Degree: | 碩士 Master |
| Department: | 敏求智慧運算學院 - 智慧運算碩士學位學程 MS Degree in Intelligent Computing |
| Year of Publication: | 2024 |
| Academic Year of Graduation: | 112 |
| Language: | English |
| Number of Pages: | 89 |
| Chinese Keywords: | 多語言一致性、知識圖譜、檢索增強生成、大型語言模型 |
| English Keywords: | multilingual consistency, knowledge graph, retrieval-augmented generation, large language models |
| ORCID: | 0009-0005-5134-377X |
| Access Count: | Views: 254; Downloads: 0 |
本研究探討如何透過整合檢索增強生成(Retrieval-Augmented Generation, RAG)技術與大型語言模型(Large Language Models, LLMs),提升 LLMs 在法律文件上的表現,以因應準確性與多語言性的挑戰。研究利用知識圖譜與結構化資料檢索機制,增強 LLMs 對法律術語的理解與生成能力,處理不同語言中法律溝通的複雜性與專門性。在命名實體識別與法律問答的實驗評估中,模型性能皆有所提升。實驗結果顯示,RAG 方法能有效增強 LLMs 在法律應用中的回應,而結構化的法律本體三元組在資訊檢索中扮演關鍵角色,透過餘弦相似度捕捉法律文本的語義細節,進而提升多語言一致性。
研究發現強調整合外部知識源以彌補 LLMs 限制的重要性,能使法律自然語言處理(Natural Language Processing, NLP)應用更加精確、相關且語言多樣。然而,要達成公平且無偏見的法律 NLP,仍需國家與國際層面的共同努力,建立全面且公開可存取的法律資料庫與穩健的開放法律資料基礎設施,以解決可存取性的挑戰。
This study investigates how integrating Retrieval-Augmented Generation (RAG) techniques with Large Language Models (LLMs) can enhance their performance on legal documents, addressing the challenges of accuracy and multilingualism. By leveraging knowledge graphs and structured data retrieval mechanisms, this research aims to bolster LLMs' comprehension and generation of legal terminology, tackling the inherent complexity and specificity of legal communication across languages. Experimental evaluations on Named Entity Recognition and Legal Question Answering demonstrate substantial improvements in model performance. The results underscore the effectiveness of RAG approaches in enhancing LLMs' responses for legal applications and the pivotal role of structured legal ontology triples in information retrieval, with cosine similarity measures capturing the semantic nuances of legal texts to improve multilingual consistency.
The findings emphasize the value of integrating external knowledge sources to mitigate the limitations of LLMs, enabling more precise, relevant, and linguistically diverse legal Natural Language Processing (NLP) applications. They also highlight, however, that equitable and unbiased legal NLP requires concerted national and international efforts to build comprehensive, openly accessible legal data repositories across jurisdictions: while external knowledge improves precision, relevance, and linguistic diversity, genuine fairness depends on addressing accessibility through robust open legal data infrastructure.
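As a rough illustration of the retrieval step the abstract describes, the sketch below ranks verbalized legal ontology triples by cosine similarity to a query and prepends the top matches to an LLM prompt. It is a minimal example under stated assumptions, not the thesis's actual pipeline: the triples are invented for illustration, and the `embed` function is a hashed character-trigram stand-in used only so the snippet runs offline (a real setup would use a multilingual sentence encoder).

```python
import hashlib

import numpy as np

# Toy (subject, predicate, object) triples -- illustrative only, not taken from
# the thesis's actual legal knowledge graph.
TRIPLES = [
    ("Civil Code Art. 184", "imposes", "liability for damage caused by a wrongful act"),
    ("民法第184條", "規定", "因故意或過失不法侵害他人權利者負損害賠償責任"),
    ("Labor Standards Act Art. 9", "defines", "fixed-term and non-fixed-term employment contracts"),
]


def embed(text: str, dim: int = 256) -> np.ndarray:
    """Hashed character-trigram encoder: a stand-in so the sketch runs offline.
    A real pipeline would use a multilingual sentence encoder instead."""
    vec = np.zeros(dim)
    for i in range(max(len(text) - 2, 0)):
        bucket = int(hashlib.md5(text[i:i + 3].encode("utf-8")).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec


def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank verbalised triples by cosine similarity to the query; return the top k."""
    q = embed(query)
    verbalised = [f"{s} {p} {o}" for s, p, o in TRIPLES]
    # Embeddings are L2-normalised, so the dot product equals cosine similarity.
    ranked = sorted(verbalised, key=lambda t: float(q @ embed(t)), reverse=True)
    return ranked[:k]


if __name__ == "__main__":
    question = "Who is liable for damage caused by a wrongful act?"
    facts = retrieve(question)
    # The retrieved facts become grounding context for the LLM prompt.
    prompt = "Context:\n" + "\n".join(facts) + f"\n\nQuestion: {question}\nAnswer:"
    print(prompt)
```

In a setup like this, retrieval quality rests entirely on the embedding model; with a genuinely multilingual encoder, the same triple index can serve queries phrased in either Chinese or English, which is the property the abstract ties to multilingual consistency.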
On-campus access: available from 2026-12-31.