
Author: Su, Chia-Cheng (蘇佳成)
Title: A Comparative Study of Explainable Machine Learning Models for Corporate Credit Scoring
Advisor: Shu, Lih-Chyun (徐立群)
Degree: Master
Department: College of Management - Department of Accountancy
Year of Publication: 2023
Graduation Academic Year: 111
Language: English
Pages: 45
Keywords: Credit Scoring, Explainable AI, SHAP, LIME, Anchors
    Lending is a primary source of revenue for banks, and safe lending depends on sound credit rating to reduce default risk. Banks have traditionally evaluated applicants' creditworthiness with logistic regression; as technology has advanced, increasingly accurate machine learning methods have become available for predicting credit scores. Although these models estimate credit ratings precisely, they are complex and hard to interpret, which makes the highly regulated banking industry hesitant to adopt them. Many researchers have therefore turned to explainable artificial intelligence (XAI) techniques to improve model interpretability. Because little research in credit scoring considers predictive accuracy and interpretability together, or evaluates explanatory output against the needs of credit rating personnel, this study explores (1) using the output of XAI methods to understand how a predictive model works and to improve its performance; (2) evaluating XAI methods against the needs and work experience of frontline credit-lending staff, and identifying which methods or presentation formats serve them best; and (3) combining several kinds of explanatory information to help applicants adjust or formulate financial strategies. The results show that SHAP's global explanation of the predictive model helps model developers screen features efficiently and improve predictive performance. For frontline staff, presenting the probability of each credit risk level helps them explain the rationale to applicants, a need met by the LIME method used in this study. In addition, the study combines LIME with the Anchors method as a basis for financial advice, an approach that frontline staff also endorsed.
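    To make the workflow described above concrete, the following is a minimal, hypothetical Python sketch, not the thesis's actual code or data: it trains an XGBoost credit model on synthetic placeholder features, screens features by mean absolute SHAP attribution (the global-explanation step the abstract credits to SHAP), and uses LIME to report a probability for each credit risk level for a single applicant. The column names (ratio_0, ...), the three risk classes, and the random data are illustrative assumptions.

    # Hypothetical sketch of the SHAP feature-screening and LIME per-class
    # explanation steps described in the abstract; all data are synthetic
    # placeholders, not the thesis's dataset.
    import numpy as np
    import pandas as pd
    import shap
    import xgboost as xgb
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = pd.DataFrame(rng.random((1000, 10)),
                     columns=[f"ratio_{i}" for i in range(10)])  # fake financial ratios
    y = rng.integers(0, 3, size=1000)  # three assumed risk levels: low/medium/high
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = xgb.XGBClassifier(objective="multi:softprob")
    model.fit(X_train, y_train)

    # Global explanation: average |SHAP value| per feature across samples (and
    # classes, for multi-class output), then keep the highest-ranked features.
    explainer = shap.TreeExplainer(model)
    sv = explainer(X_test).values  # (n, features) or (n, features, classes)
    importance = np.abs(sv).reshape(sv.shape[0], X.shape[1], -1).mean(axis=(0, 2))
    top_features = X.columns[np.argsort(importance)[::-1][:5]]
    print("Features kept after SHAP screening:", list(top_features))

    # Local explanation: LIME reports a probability for each risk level, the
    # presentation the study's frontline staff found useful for explaining
    # decisions to applicants.
    lime_explainer = LimeTabularExplainer(
        X_train.values, feature_names=list(X.columns),
        class_names=["low", "medium", "high"], mode="classification")
    exp = lime_explainer.explain_instance(
        X_test.values[0], model.predict_proba, num_features=5, top_labels=3)
    print("Risk-level probabilities:", model.predict_proba(X_test.values[:1])[0])
    print(exp.as_list(label=exp.available_labels()[0]))

    For the LIME-plus-Anchors combination mentioned above, an Anchors implementation is available in packages such as the Anchors authors' anchor library or alibi; it is omitted here to keep the sketch short.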

    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Tables
    Figures
    1. Introduction
    2. Literature Review
    2.1 Explainable artificial intelligence
    2.1.1 SHAP
    2.1.2 LIME
    2.1.3 Anchors
    2.1.4 Counterfactual explanations
    2.2 Explainable artificial intelligence for credit scoring
    3. Research Design
    3.1 Data processing
    3.2 Modeling
    3.2.1 Logistic Regression
    3.2.2 Random Forest
    3.2.3 XGBoost
    3.3 Model evaluation
    3.4 Model explanation
    3.5 Discussing the results with a practitioner
    4. Experimental Results
    4.1 Model performance
    4.2 Model understanding
    4.2.1 Model understanding
    4.2.2 Performance improvement by SHAP
    4.2.3 Error analysis
    4.2.4 Different XAI functions for financial decision-making
    5. The opinions and suggestions from the practitioner
    6. Conclusion
    References
    Appendix - Variable definitions

    Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160.
    Identifying and Assessing the Risks of Material Misstatement. https://www.ardf.org.tw/ardf/2022/315.pdf
    Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., & Benjamins, R. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
    Berg, M. v. d., & Kuiper, O. (2020). A conceptual framework for explainable AI (XAI): XAI in the financial sector. Hogeschool (University of Applied Sciences).
    Boza, P., & Evgeniou, T. (2021). Implementing AI principles: Frameworks, processes, and tools.
    Chen, T., & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785-794.
    Chitkara, R., Dring, R., Vieira, E., Duln, C., Gao, J., Marty, P., Ballhaus, W., Ladda, S., Ozaki, M., Yoon, H., Linnemeijer, I., Pukha, Y., Jansen, M., Sarai, J., & Sur, P.-A. (2017). 20 years inside the mind of the CEO: Technology industry results. https://www.pwc.com/gx/en/ceo-survey/2017/industries/20th-ceo-survey-technology.pdf
    Dastile, X., Celik, T., & Potsane, M. (2020). Statistical and machine learning models in credit scoring: A systematic literature survey. Applied Soft Computing, 91, 106263.
    Dastile, X., Celik, T., & Vandierendonck, H. (2022). Model-agnostic counterfactual explanations in credit scoring. IEEE Access, 10, 69543-69554.
    Demajo, L. M., Vella, V., & Dingli, A. (2020). Explainable AI for interpretable credit scoring. arXiv preprint arXiv:2012.03749.
    Grath, R. M., Costabello, L., Van, C. L., Sweeney, P., Kamiab, F., Shen, Z., & Lecue, F. (2018). Interpretable credit application predictions with counterfactual explanations. arXiv preprint arXiv:1811.05245.
    Gunning, D., & Aha, D. (2019). DARPA's explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44-58.
    Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., Sesing, A., & Baum, K. (2021). What do we want from Explainable Artificial Intelligence (XAI)? A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296, 103473. https://doi.org/10.1016/j.artint.2021.103473
    Leo, M., Sharma, S., & Maddulety, K. (2019). Machine learning in banking risk management: A literature review. Risks, 7(1), 29. https://doi.org/10.3390/risks7010029
    Lim, B. Y., Dey, A. K., & Avrahami, D. (2009). Why and why not explanations improve the intelligibility of context-aware intelligent systems. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
    Liu, W., Fan, H., & Xia, M. (2022). Credit scoring based on tree-enhanced gradient boosting decision trees. Expert Systems with Applications, 189, 116034. https://doi.org/10.1016/j.eswa.2021.116034
    Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 4768-4777.
    Misheva, B. H., Osterrieder, J., Hirsa, A., Kulkarni, O., & Lin, S. F. (2021). Explainable AI in credit risk management. arXiv preprint arXiv:2103.00949.
    Moscato, V., Picariello, A., & Sperlí, G. (2021). A benchmark of machine learning approaches for credit score prediction. Expert Systems with Applications, 165, 113986. https://doi.org/10.1016/j.eswa.2020.113986
    Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M. E., Ruggieri, S., Turini, F., Papadopoulos, S., & Krasanakis, E. (2020). Bias in data-driven artificial intelligence systems: An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3), e1356. https://doi.org/10.1002/widm.1356
    Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135-1144.
    Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). Anchors: High-precision model-agnostic explanations. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
    Shapley, L. S. (1953). A value for n-person games. In Contributions to the Theory of Games (Vol. 2, pp. 307-318). Princeton University Press.
    Surkov, A., Srinivas, V., & Gregorie, J. (2022). Unleashing the power of machine learning models in banking through explainable artificial intelligence (XAI). Deloitte Insights. https://www2.deloitte.com/us/en/insights/industry/financial-services/explainable-ai-in-banking.html
    Telford, T. (2019). Apple Card algorithm sparks gender bias allegations against Goldman Sachs. Washington Post, 11.
    Van Lent, M., Fisher, W., & Mancuso, M. (2004). An explainable artificial intelligence system for small-unit tactical behavior. Proceedings of the National Conference on Artificial Intelligence.
    Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31, 841.
    Wijnands, M. (2021). Explaining black box decision-making: Adopting explainable artificial intelligence in credit risk prediction for P2P lending [Master's thesis, University of Twente].

    Full text availability: on campus 2028-08-17; off campus 2028-08-17.
    The electronic thesis has not yet been authorized for public release; for the print copy, please consult the library catalog.